Logic Explained Networks

Authors

Abstract

The large and still increasing popularity of deep learning clashes with a major limit of neural network architectures, which consists in their lack of capability to provide human-understandable motivations of their decisions. In situations in which the machine is expected to support the decision of human experts, providing a comprehensible explanation is a feature of crucial importance. The language used to communicate the explanations must be formal enough to be implementable in a machine and friendly enough to be understandable by a wide audience. In this paper, we propose a general approach to Explainable Artificial Intelligence in the case of neural architectures, showing how a mindful design of the networks leads to a family of interpretable deep learning models called Logic Explained Networks (LENs). LENs only require their inputs to be human-understandable predicates, and they provide explanations in terms of simple First-Order Logic (FOL) formulas involving such predicates. LENs are general enough to cover a large number of scenarios. Amongst them, we consider the case in which LENs are directly used as special classifiers with the capability of being explainable, or when they act as an additional network with the role of creating the conditions for making a black-box classifier explainable by FOL formulas. Although supervised learning problems are mostly emphasized, we also show that LENs can learn and provide explanations in unsupervised learning settings. Experimental results on several datasets and tasks show that LENs may yield better classifications than established white-box models, such as decision trees and Bayesian rule lists, while providing more compact and meaningful explanations.
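As a rough, illustrative sketch of the idea summarized in the abstract (not the authors' implementation): a tiny classifier over Boolean, human-understandable predicates, trained with plain gradient descent, from which a FOL-style conjunction is read off the learned weights. The predicate names, toy data, and weight threshold below are assumptions made purely for illustration.

# Minimal sketch, assuming binary concept inputs and a simple logistic model
# standing in for the interpretable layer of a LEN-like architecture.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3 human-understandable predicates; target rule is p0 AND NOT p2.
X = rng.integers(0, 2, size=(200, 3)).astype(float)
y = ((X[:, 0] == 1) & (X[:, 2] == 0)).astype(float)

# Logistic model trained with full-batch gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

# Rule extraction: keep predicates whose weight magnitude clears a threshold;
# the sign decides whether the literal appears positive or negated.
names = ["p0", "p1", "p2"]
literals = [(names[i] if w[i] > 0 else f"~{names[i]}")
            for i in range(3) if abs(w[i]) > 1.0]
print("class_1 <-", " & ".join(literals))   # expected: class_1 <- p0 & ~p2

The printed conjunction is the kind of FOL-style explanation over input predicates that the paper describes; the actual LENs replace this toy model with purpose-built neural layers and handle far richer formula classes.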


Related articles

Gradients explode - Deep Networks are shallow - ResNet explained

Whereas it is believed that techniques such as Adam, batch normalization and, more recently, SeLU nonlinearities “solve” the exploding gradient problem, we show that this is not the case in general and that in a range of popular MLP architectures, exploding gradients exist and that they limit the depth to which networks can be effectively trained, both in theory and in practice. We explain why ...

Encoding Markov logic networks in Possibilistic Logic

Markov logic uses weighted formulas to compactly encode a probability distribution over possible worlds. Despite the use of logical formulas, Markov logic networks (MLNs) can be difficult to interpret, due to the often counter-intuitive meaning of their weights. To address this issue, we propose a method to construct a possibilistic logic theory that exactly captures what can be derived from a ...

Neuroplasticity explained by broad-scale networks and modularity?

The human brain is a formidably complex network, the seat of cognition and consciousness and many other remarkable features, including the capacities of growth, self-organisation, reorganisation and the ability to recover from significant damage. This combined dynamic capability is known as plasticity. Considerable neuro-reorganisation is a feature of the brain commonly thought to be restricted...

Cluster-Head Election in Wireless Sensor Networks Using Fuzzy Logic

A wireless sensor network consists of many inexpensive sensor nodes that can be used to confidently extract data from the environment. Nodes are organized into clusters, and in each cluster all non-cluster nodes transmit their data only to the cluster-head. The cluster-head transmits all received data to the base station. Because of the energy limitation in sensor nodes and the energy reduction in each d...

Hybrid Markov Logic Networks

Markov logic networks (MLNs) combine first-order logic and Markov networks, allowing us to handle the complexity and uncertainty of real-world problems in a single consistent framework. However, in MLNs all variables and features are discrete, while most real-world applications also contain continuous ones. In this paper we introduce hybrid MLNs, in which continuous properties (e.g., the distan...


Journal

Journal title: Artificial Intelligence

Year: 2023

ISSN: 2633-1403

DOI: https://doi.org/10.1016/j.artint.2022.103822